Distributional Learning of Appearance
Authors
Abstract
Opportunities for associationist learning of word meaning, where a word is heard or read contemporaneously with information about its meaning being available, are considered too infrequent to account for the rate of language acquisition in children. It has been suggested that additional learning could occur in a distributional mode, where information is gleaned from the distributional statistics (word co-occurrence etc.) of natural language. Such statistics are relevant to meaning because of the Distributional Principle that 'words of similar meaning tend to occur in similar contexts'. Computational systems, such as Latent Semantic Analysis, have substantiated the viability of distributional learning of word meaning by showing that semantic similarities between words can be accurately estimated from analysis of the distributional statistics of a natural language corpus. We consider whether appearance similarities can also be learnt in a distributional mode. As grounds for such a mode we advance the Appearance Hypothesis that 'words with referents of similar appearance tend to occur in similar contexts'. We assess the viability of such learning by examining the performance of a computer system that interpolates, on the basis of distributional and appearance similarity, from words whose appearance it has been explicitly taught, in order to identify and name objects it has not been taught about. Our experiments use a set of 660 simple concrete nouns. Appearance information for each word is modelled using a set of images of examples of that word. Distributional similarity is computed from a standard natural language corpus. Our computational results support the viability of distributional learning of appearance.
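The abstract does not spell out the algorithmic details, but the core idea (distributional similarity from corpus co-occurrence, plus interpolation from taught words to name an untaught object) can be illustrated with a minimal sketch. The Python below is only one plausible form such a system could take: it assumes raw co-occurrence count vectors stand in for the distributional model and a single feature vector per word stands in for its appearance model. All function names, parameters, and the scoring rule are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def cooccurrence_vectors(corpus_sentences, vocab, window=5):
    """Build simple co-occurrence count vectors for each vocabulary word.

    corpus_sentences: list of tokenised sentences (lists of words).
    Returns a dict mapping each vocab word to a |vocab|-dimensional vector.
    (Illustrative stand-in for a corpus-derived distributional model.)
    """
    index = {w: i for i, w in enumerate(vocab)}
    vectors = {w: np.zeros(len(vocab)) for w in vocab}
    for sentence in corpus_sentences:
        for i, word in enumerate(sentence):
            if word not in index:
                continue
            lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
            for j in range(lo, hi):
                if j != i and sentence[j] in index:
                    vectors[word][index[sentence[j]]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity, returning 0 for zero vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def name_unseen_object(image_features, taught_words, appearance_models,
                       dist_vectors, candidate_words):
    """Guess a name for an object whose word was never explicitly taught.

    Each candidate word is scored by combining (a) how visually similar the
    object is to each taught word's appearance model and (b) how
    distributionally similar the candidate is to that same taught word,
    i.e. interpolating through the taught vocabulary.  This scoring rule is
    a hypothetical illustration, not the paper's reported procedure.
    """
    scores = {}
    for cand in candidate_words:
        score = 0.0
        for taught in taught_words:
            visual_sim = cosine(image_features, appearance_models[taught])
            distributional_sim = cosine(dist_vectors[cand], dist_vectors[taught])
            score += visual_sim * distributional_sim
        scores[cand] = score
    return max(scores, key=scores.get)
```

Under these assumptions, a word like 'zebra' could be named without any taught zebra images, provided the system has been taught visually and distributionally similar words such as 'horse' and 'donkey'.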